
    Brain Projects Think Big


    Losing the battle but winning the war: game theoretic analysis of the competition between motoneurons innervating a skeletal muscle

    The fibers in a skeletal muscle are divided into groups called “muscle units,” whereby each muscle unit is innervated by a single neuron. It was found that neurons with low activation thresholds have smaller muscle units than neurons with higher activation thresholds. This results in a fixed recruitment order of muscle units, from smallest to largest, called the “size principle.” It is thought that the size principle results from a competitive process, taking place after birth, between the neurons innervating the muscle. The underlying mechanism of the competition was not understood. Moreover, the results of the majority of experiments that manipulated activity during the competition period seemed to contradict the size principle. Experiments on isolated muscle fibers showed that the competition is governed by a Hebbian-like rule, whereby neurons with low activation thresholds have a competitive advantage at any single muscle fiber. Thus, neurons with low activation thresholds are expected to have larger muscle units, in contradiction to what is seen empirically. This state of affairs was termed “paradoxical.” In the present study we developed a new game-theoretic framework to analyze such competitive biological processes. In this game, neurons are the players competing to innervate a maximal number of muscle fibers. We showed that, in order to innervate more muscle fibers, it is advantageous to win the later competitions, as the neurons with higher activation thresholds do. This both explains the size principle and resolves the seemingly paradoxical experimental data. Our model establishes that the competition at each muscle fiber may indeed be Hebbian and that the size principle still emerges from these competitions as an overall property of the system. Thus, the less active neurons “lose the battle but win the war.” Our model provides experimentally testable predictions. The new game-theoretic approach may be applied to competitions in other biological systems.
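    The "lose the battle, win the war" intuition can be illustrated with a toy sequential game. This is a hypothetical sketch, not the authors' model: the bidding scheme, budgets, and numbers are all invented here. Two neurons spread a fixed resource budget over a sequence of fiber competitions; the fiber goes to the higher bidder, so a neuron that concedes the early battles can outbid its rival on the later ones and capture more fibers overall.

```python
# Hypothetical toy, NOT the authors' game-theoretic model: two neurons
# bid a fixed resource budget across a sequence of fiber competitions.
N_FIBERS = 10
BUDGET = 10.0  # each neuron's total resource (invented units)

def play(bids_a, bids_b):
    """Each fiber goes to the higher bidder; return (wins of A, wins of B)."""
    wins_a = wins_b = 0
    for a, b in zip(bids_a, bids_b):
        if a > b:
            wins_a += 1
        else:
            wins_b += 1
    return wins_a, wins_b

# Neuron A (low threshold, highly active) spends evenly from the start;
# neuron B concedes the first 4 fibers and outbids A on the remaining 6.
bids_a = [BUDGET / N_FIBERS] * N_FIBERS       # 1.0 per fiber
bids_b = [0.0] * 4 + [BUDGET / 6] * 6         # ~1.67 per late fiber

wins_a, wins_b = play(bids_a, bids_b)
print(wins_a, wins_b)  # 4 6 -> A wins the early battles, B wins the war
```

Despite winning every early head-to-head (Hebbian advantage at single fibers), neuron A ends up with the smaller "muscle unit," as the size principle requires.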

    Sparsity-Based Super Resolution for SEM Images

    The scanning electron microscope (SEM) produces an image of a sample by scanning it with a focused beam of electrons. The electrons interact with the atoms in the sample, which emit secondary electrons that contain information about the surface topography and composition. The sample is scanned by the electron beam point by point, until an image of the surface is formed. Since its invention in 1942, the SEM has become paramount in the discovery and understanding of the nanometer world, and today it is used extensively both in research and in industry. In principle, SEMs can achieve resolution better than one nanometer. However, for many applications, working at sub-nanometer resolution implies an exceedingly large number of scanning points. For exactly this reason, SEM diagnostics of microelectronic chips are performed either at high resolution (HR) over a small area or at low resolution (LR) while capturing a larger portion of the chip. Here, we employ sparse coding and dictionary learning to algorithmically enhance LR SEM images of microelectronic chips up to the level of the HR images acquired by slow SEM scans, while considerably reducing the noise. Our methodology consists of two steps: an offline stage of learning a joint dictionary from a sequence of LR and HR images of the same region in the chip, followed by a fast online super-resolution step in which the resolution of a new LR image is enhanced. We provide several examples with typical chips used in the microelectronics industry, as well as a statistical study on arbitrary images with characteristic structural features. Conceptually, our method works well when the images have similar characteristics. This work demonstrates that employing sparsity concepts can greatly improve the performance of SEM, thereby considerably increasing the scanning throughput without compromising on analysis quality and resolution. Final publication available in Nano Letters (ACS).
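    The core of the coupled-dictionary idea can be sketched in a few lines. This is a minimal illustration under invented assumptions (random dictionary, made-up patch sizes), not the paper's trained dictionary: an LR block D_lr and an HR block D_hr share one sparse code, so sparse-coding a new LR patch against D_lr and multiplying the code by D_hr yields the super-resolved patch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented dimensions for illustration: 5x5 LR patches, 10x10 HR patches,
# a 40-atom joint dictionary, and 3-sparse codes.
m_lr, m_hr, n_atoms, sparsity = 25, 100, 40, 3

D_lr = rng.standard_normal((m_lr, n_atoms))
D_lr /= np.linalg.norm(D_lr, axis=0)          # unit-norm LR atoms
D_hr = rng.standard_normal((m_hr, n_atoms))   # coupled HR block

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of y w.r.t. D."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

# Ground truth: one sparse code generates a matched LR/HR patch pair.
true_code = np.zeros(n_atoms)
true_code[[5, 17, 31]] = [1.5, -2.0, 0.8]
lr_patch, hr_patch = D_lr @ true_code, D_hr @ true_code

# Online step: sparse-code the LR patch, then synthesize its HR counterpart.
code = omp(D_lr, lr_patch, sparsity)
hr_est = D_hr @ code
```

In the paper the dictionary is learned offline from registered LR/HR image pairs of the same chip region; here a random dictionary stands in so the reconstruction step can run standalone.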

    Stability and flexibility of odor representations in the mouse olfactory bulb

    Dynamic changes in sensory representations have been basic tenets of studies in neural coding and plasticity. In olfaction, relatively little is known about the dynamic range of changes in odor representations under different brain states and over time. Here, we used time-lapse in vivo two-photon calcium imaging to describe changes in odor representation by mitral cells, the output neurons of the mouse olfactory bulb. Using anesthetics as a gross manipulation to switch between different brain states (wakefulness and under anesthesia), we found that odor representations by mitral cells undergo significant re-shaping across states but not over time within a state. Odor representations were well balanced across the population in the awake state yet highly diverse under anesthesia. To evaluate differences in odor representation across states, we used linear classifiers to decode odor identity in one state based on training data from the other state. Decoding across states resulted in nearly chance-level accuracy. In contrast, repeating the same procedure for data recorded within the same state but at different time points showed that time had a rather minor impact on odor representations. Relative to the differences across states, odor representations remained stable over months. Thus, single mitral cells can change dynamically across states yet maintain robust representations across months. These findings have implications for sensory coding and plasticity in the mammalian brain.
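    The cross-state decoding logic can be sketched on synthetic data. This is a hypothetical toy (all numbers invented, and a nearest-centroid decoder stands in for the paper's linear classifiers): each odor evokes a mean population response, and the "anesthetized" means are drawn independently of the "awake" means to mimic state-dependent re-shaping.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented sizes: 10 odors, 50 mitral cells, 20 trials per odor.
n_odors, n_cells, n_trials, noise = 10, 50, 20, 0.3

means_awake = rng.standard_normal((n_odors, n_cells))
means_anesth = rng.standard_normal((n_odors, n_cells))  # unrelated means

def trials(means):
    """Noisy single-trial responses, shape (n_odors * n_trials, n_cells)."""
    X = np.repeat(means, n_trials, axis=0)
    return X + noise * rng.standard_normal(X.shape)

y = np.repeat(np.arange(n_odors), n_trials)
train_awake, test_awake, test_anesth = (trials(means_awake),
                                        trials(means_awake),
                                        trials(means_anesth))

# Nearest-centroid decoder trained on awake data only.
centroids = np.array([train_awake[y == o].mean(axis=0) for o in range(n_odors)])

def accuracy(X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float(np.mean(d.argmin(axis=1) == y))

within_acc = accuracy(test_awake)   # same state: high accuracy
cross_acc = accuracy(test_anesth)   # across states: near chance (0.1 here)
```

Training in one state and testing in the other collapses toward chance, while within-state decoding stays high, mirroring the dissociation reported in the abstract.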

    A Hierarchical Structure of Cortical Interneuron Electrical Diversity Revealed by Automated Statistical Analysis

    Although the diversity of cortical interneuron electrical properties is well recognized, the number of distinct electrical types (e-types) is still a matter of debate. Recently, descriptions of interneuron variability were standardized by multiple laboratories on the basis of a subjective classification scheme set out by the Petilla convention (Petilla Interneuron Nomenclature Group, PING). Here, we present a quantitative, statistical analysis of a database of nearly five hundred neurons manually annotated according to the PING nomenclature. For each cell, 38 features were extracted from responses to suprathreshold current stimuli and statistically analyzed to examine whether cortical interneurons subdivide into e-types. We showed that the partitioning into different e-types is indeed the major component of data variability. The analysis suggests refining the PING e-type classification to be hierarchical, whereby most variability is first captured within a coarse subpartition that is then divided into finer subpartitions. The coarse partition matches the well-known division of interneurons into fast-spiking and adapting cells. Finer subpartitions match the burst, continuous, and delayed subtypes. Additionally, our analysis enabled the ranking of features according to their ability to differentiate among e-types. We showed that our quantitative e-type assignment is more than 90% accurate and even catches several human errors.
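    The coarse-then-fine structure of a hierarchical partition can be sketched with standard agglomerative clustering. This is a schematic on synthetic feature vectors (not the paper's database or its specific statistical method): cutting one dendrogram at two levels yields nested partitions, the defining property of the proposed hierarchical refinement.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)

# Synthetic stand-in for the database: 38 electrical features per cell,
# three well-separated groups playing the role of e-types.
n_features = 38
groups = [rng.standard_normal((20, n_features)) + mu
          for mu in (0.0, 6.0, 12.0)]
X = np.vstack(groups)

Z = linkage(X, method="ward")
coarse = fcluster(Z, t=2, criterion="maxclust")  # coarse subpartition
fine = fcluster(Z, t=3, criterion="maxclust")    # finer subpartition

# Cuts of a single dendrogram are nested by construction: each fine
# cluster lies entirely inside one coarse cluster.
nested = all(len(set(coarse[fine == c])) == 1 for c in set(fine))
print(nested)  # True
```

The nesting check makes the "hierarchical" claim concrete: refining the cut can only split existing coarse classes, never mix them.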

    Effect of geometrical irregularities on propagation delay in axonal trees

    Multiple successive geometrical inhomogeneities, such as extensive arborization and terminal varicosities, are common characteristics of axons. Near such regions the velocity of the action potential (AP) changes. This study uses AXONTREE, a modeling tool developed in the companion paper, for two purposes: (a) to gain insights into the consequences of these irregularities for the propagation delay along axons, and (b) to simulate the propagation of APs along a reconstructed axon from a cortical cell, taking into account information concerning the distribution of boutons (release sites) along such axons, to estimate the distribution of arrival times of APs at the axon's release sites. We used Hodgkin and Huxley (1952)-like membrane properties at 20 degrees C. Focusing on the propagation delay that results from geometrical changes along the axon (and not from the actual diameters or length of the axon), the main results are: (a) the propagation delay at a region of a single geometrical change (a step change in axon diameter or a branch point) is on the order of a few tenths of a millisecond. This delay critically depends on the kinetics and the density of the excitable channels; (b) as a general rule, the lag imposed on AP propagation at a region with a geometrical ratio GR greater than 1 is larger than the lead obtained at a region with the reciprocal of that GR value; (c) when the electrotonic distance between two successive geometrical changes (Xdis) is small, the delay is not the sum of the individual delays at each geometrical change when isolated. When both geometrical changes have GR greater than 1, or both have GR less than 1, the delay is supralinear (larger than the sum of the individual delays); the two other combinations yield a sublinear delay; and (d) in a varicose axon, where the diameter changes frequently from thin to thick and back to thin, the propagation velocity may be slower than the velocity along a uniform axon with the thin diameter.
    Finally, we computed propagation delays along a morphologically characterized axon from layer V of the somatosensory cortex of the cat. This axon projects mainly to area 4 but also sends collaterals to areas 3b and 3a. The model predicts that, for this axon, areas 3a, 3b, and the proximal part of area 4 are activated approximately 2 ms before the activation of the distal part of area 4.
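    The geometrical ratio GR that governs the lag or lead follows Rall's 3/2-power rule: GR = 1 means the daughter branches impedance-match the parent, GR > 1 delays the AP, and GR < 1 speeds it up. A small numeric sketch (example diameters invented):

```python
# Rall's geometrical ratio at a branch point or diameter step:
# GR = sum(d_daughter^(3/2)) / d_parent^(3/2), diameters in the same units.
def geometrical_ratio(d_parent, d_daughters):
    return sum(d ** 1.5 for d in d_daughters) / d_parent ** 1.5

# A 1.0 um parent splitting into two 0.8 um daughters: GR > 1, so the
# AP is delayed at the branch point.
print(round(geometrical_ratio(1.0, [0.8, 0.8]), 3))   # 1.431

# Daughters of ~0.63 um (= 0.5^(2/3)) give GR ~ 1: impedance matched,
# no lag or lead.
print(round(geometrical_ratio(1.0, [0.63, 0.63]), 3))  # 1.0
```

A step change in diameter is the single-daughter case, e.g. `geometrical_ratio(0.5, [1.0])` for a thin-to-thick step in a varicose axon.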

    A Paradoxical Isopotentiality: A Spatially Uniform Noise Spectrum in Neocortical Pyramidal Cells

    Membrane ion channels and synapses are among the most important computational elements of nerve cells. Both have stochastic components that are reflected in random fluctuations of the membrane potential. We measured the spectral characteristics of membrane voltage noise in vitro at the soma and the apical dendrite of layer 4/5 (L4/5) neocortical neurons of rats near the resting potential. We found a remarkable similarity between the voltage noise power spectra at the soma and the dendrites, despite a marked difference in their respective input impedances. At both sites, the noise level and the input impedance are voltage dependent; at the soma, the noise level increased from σ = 0.33 ± 0.28 mV at 10 mV hyperpolarization from the resting potential to σ = 0.59 ± 0.30 mV at a depolarization of 10 mV. At the dendrite, the noise increased from σ = 0.34 ± 0.28 mV to σ = 0.56 ± 0.30 mV, respectively. TTX reduced both the input impedance and the voltage noise, and eliminated their voltage dependence at both locations. We describe a detailed compartmental model of an L4/5 neuron with simplified electrical properties that successfully reproduces the difference in input impedance between dendrites and soma, and demonstrates that spatially uniform conductance-based noise sources lead to an apparently isopotential structure exhibiting a uniform power spectrum of voltage noise at all locations. We speculate that a homogeneous distribution of noise sources ensures that variability in synaptic amplitude, as well as in the timing of action potentials, is location invariant.
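    The basic measurement, a voltage-noise power spectrum, can be sketched on a synthetic trace. This is a toy (sampling rate, time constant, and noise model all invented, not the paper's recordings): white conductance noise is low-pass filtered by a one-pole membrane time constant, and the PSD is estimated by averaging periodograms over segments, Welch-style.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented parameters: 10 kHz sampling, 20 ms membrane time constant.
fs, tau, n = 10_000.0, 0.02, 2 ** 16
alpha = np.exp(-1.0 / (fs * tau))        # one-pole RC filter coefficient

white = rng.standard_normal(n)           # white "conductance" noise
v = np.empty(n)
v[0] = 0.0
for t in range(1, n):                    # low-pass filter: membrane voltage
    v[t] = alpha * v[t - 1] + (1 - alpha) * white[t]

def psd(x, nperseg=1024):
    """Segment-averaged periodogram (Welch-style) of a real signal."""
    segs = x[: len(x) // nperseg * nperseg].reshape(-1, nperseg)
    spec = np.abs(np.fft.rfft(segs, axis=1)) ** 2
    return np.fft.rfftfreq(nperseg, 1 / fs), spec.mean(axis=0) / (fs * nperseg)

freqs, p = psd(v)
low = p[(freqs > 0) & (freqs < 50)].mean()   # power near DC
high = p[freqs > 2000].mean()                # power well above 1/(2*pi*tau)
```

Comparing such spectra between a somatic and a dendritic trace, normalized by the local input impedance, is the kind of analysis behind the paper's "uniform power spectrum" finding.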

    A Novel Multiple Objective Optimization Framework for Constraining Conductance-Based Neuron Models by Experimental Data

    We present a novel framework for automatically constraining parameters of compartmental models of neurons, given a large set of experimentally measured responses of these neurons. In experiments, intrinsic noise gives rise to a large variability (e.g., in firing pattern) in the voltage responses to repetitions of the exact same input. Thus, the common approach of fitting models by attempting to perfectly replicate, point by point, a single chosen trace out of the spectrum of variable responses does not seem to do justice to the data. In addition, finding a single error function that faithfully characterizes the distance between two spiking traces is not a trivial pursuit. To address these issues, one can adopt a multiple objective optimization approach that allows the use of several error functions jointly. When more than one error function is available, the comparison between experimental voltage traces and model responses can be performed on the basis of individual features of interest (e.g., spike rate, spike width). Each feature can be compared between the model and the experimental mean, in units of its experimental variability, thereby incorporating this variability into the fitting. We demonstrate the success of this approach, when used in conjunction with genetic algorithm optimization, in generating an excellent fit between model behavior and the firing pattern of two distinct electrical classes of cortical interneurons, accommodating and fast-spiking. We argue that the multiple, diverse models generated by this method could serve as building blocks for the realistic simulation of large neuronal networks.
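    The feature-based distance described above reduces to a z-score per objective. A minimal sketch (feature names, means, and SDs invented for illustration): each feature of the model response is compared with the experimental mean in units of the experimental standard deviation, giving one error value per objective for the multi-objective optimizer.

```python
# Hypothetical experimental statistics: feature -> (mean, SD).
exp_stats = {
    "spike_rate_hz": (12.0, 2.0),
    "spike_width_ms": (0.8, 0.1),
    "ap_height_mv": (75.0, 5.0),
}

def feature_errors(model_features, stats):
    """Deviation of each model feature from the experimental mean, in SDs."""
    return {f: abs(model_features[f] - mu) / sd
            for f, (mu, sd) in stats.items()}

# Features extracted from one model trace (invented values).
model = {"spike_rate_hz": 14.0, "spike_width_ms": 0.75, "ap_height_mv": 90.0}
errs = feature_errors(model, exp_stats)

# One possible acceptance rule: every objective within 2 experimental SDs.
acceptable = all(e <= 2.0 for e in errs.values())
print(errs, acceptable)  # ap_height is 3 SDs off, so this model is rejected
```

In the multi-objective setting these per-feature errors are not summed into one scalar; the genetic algorithm keeps the models that are Pareto-optimal across all of them, which is what yields the multiple diverse acceptable models mentioned at the end of the abstract.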